Laplacian smoothing gradient descent
Authors
Abstract
We propose a class of very simple modifications of gradient descent and stochastic gradient descent leveraging Laplacian smoothing. We show that when applied to a large variety of machine learning problems, ranging from logistic regression to deep neural nets, the proposed surrogates can dramatically reduce the variance, allow taking a larger step size, and improve the generalization accuracy. The methods only involve multiplying the usual (stochastic) gradient by the inverse of a positive definite matrix (which can be computed efficiently by FFT) with a low condition number coming from the one-dimensional discrete Laplacian or its high-order generalizations. Given any vector, e.g., a gradient vector, Laplacian smoothing preserves the mean, increases the smallest component, and decreases the largest component. Moreover, we show that optimization algorithms with these surrogates converge uniformly in the Sobolev $$H_\sigma ^p$$ sense and reduce the optimality gap for convex problems. The code is available at: https://github.com/BaoWangMath/LaplacianSmoothing-GradientDescent .
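As a rough illustration of the mechanism described in the abstract, the sketch below applies Laplacian smoothing to a gradient vector. It assumes the smoothing matrix has the circulant form $$I - \sigma L$$ with periodic boundary conditions, where $$L$$ is the one-dimensional discrete Laplacian and $$\sigma \ge 0$$ is a smoothing parameter; the exact operator and parameter conventions of the paper may differ. Because the matrix is circulant, its inverse can be applied with a single FFT/inverse-FFT pair, and its eigenvalues $$1 + 4\sigma \sin ^2(\pi k/n)$$ all lie in $$[1, 1+4\sigma ]$$, which is the low condition number the abstract refers to.

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Return (I - sigma * L)^{-1} grad, where L is the 1D discrete Laplacian
    with periodic boundary conditions. The matrix is circulant, so the solve
    reduces to one FFT / inverse-FFT pair."""
    g = np.asarray(grad, dtype=float).ravel()
    n = g.size
    # First column of the circulant matrix I - sigma * L.
    c = np.zeros(n)
    c[0] = 1.0 + 2.0 * sigma
    c[1] = -sigma
    c[-1] = -sigma
    # Eigenvalues of a circulant matrix are the DFT of its first column:
    # 1 + 4 * sigma * sin^2(pi * k / n), all >= 1, so the matrix is
    # positive definite and the smoothing preserves the mean of g.
    eigenvalues = np.fft.fft(c).real
    smoothed = np.fft.ifft(np.fft.fft(g) / eigenvalues).real
    return smoothed.reshape(np.shape(grad))

# A hypothetical gradient-descent step with the smoothed surrogate gradient:
# theta = theta - lr * laplacian_smooth(grad_f(theta), sigma=1.0)
```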
Similar references
Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorit...
Empirical Comparison of Gradient Descent and Exponentiated Gradient Descent in
This report describes a series of results using the exponentiated gradient descent (EG) method recently proposed by Kivinen and Warmuth. Prior work is extended by comparing speed of learning on a nonstationary problem and on an extension to backpropagation networks. Most significantly, we present an extension of the EG method to temporal-difference and reinforcement learning. This extension is co...
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Nesterov's accelerated gradient descent (AGD), an instance of the general family of "momentum methods", provably achieves a faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stat...
Learning to Learn without Gradient Descent by Gradient Descent
We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...
Improved Laplacian Smoothing of Noisy Surface Meshes
This paper presents a technique for smoothing polygonal surface meshes that avoids the well-known problem of deformation and shrinkage caused by many smoothing methods, such as the Laplacian algorithm. The basic idea is to push the vertices of the smoothed mesh back towards their previous locations. This technique can also be used to smooth unstructured point sets, by reconstructing ...
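The push-back idea mentioned in this teaser can be sketched as follows. This is only a minimal illustration under simple assumptions (a plain neighbor-average smoother and a hypothetical blend weight `alpha`), not the algorithm from the referenced paper.

```python
import numpy as np

def smooth_with_pushback(points, neighbors, alpha=0.5, iterations=10):
    """Neighbor-average (Laplacian) smoothing followed by blending each vertex
    back toward its original position, to counteract shrinkage. `alpha` is a
    hypothetical push-back weight, not a parameter from the cited paper."""
    original = np.asarray(points, dtype=float)
    current = original.copy()
    for _ in range(iterations):
        # Plain Laplacian smoothing: move each vertex to the mean of its neighbors.
        averaged = np.array([current[nbrs].mean(axis=0) for nbrs in neighbors])
        # Push-back step: pull the smoothed vertices part of the way back.
        current = (1.0 - alpha) * averaged + alpha * original
    return current

# Example: a noisy closed polygon where each vertex's neighbors are its ring neighbors.
pts = np.random.rand(8, 2)
nbrs = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
smoothed = smooth_with_pushback(pts, nbrs, alpha=0.3, iterations=5)
```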
Journal
Journal title: Research in the Mathematical Sciences
Year: 2022
ISSN: 2522-0144, 2197-9847
DOI: https://doi.org/10.1007/s40687-022-00351-1